Video retrieval has made tremendous progress with the development of vision-language models. However, further improving these models requires additional labelled data, which demands substantial manual effort. In this paper, we propose MKTVR, a framework that leverages knowledge transfer from multilingual models to boost the performance of video retrieval. We first use state-of-the-art machine translation models to construct pseudo ground-truth multilingual video-text pairs. We then use this data to learn video-text representations in which English and non-English text queries are represented in a common embedding space based on a pretrained multilingual model. We evaluate our proposed approach on four English video retrieval datasets: MSRVTT, MSVD, DiDeMo and Charades. Experimental results demonstrate that our approach achieves state-of-the-art results on all datasets, outperforming previous models. Finally, we also evaluate our model on a multilingual video retrieval dataset covering six languages and show that it outperforms previous multilingual video retrieval models in a zero-shot setting.
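The abstract above describes learning a common embedding space for videos and (possibly machine-translated) captions. A minimal sketch of how such paired embeddings are typically aligned is a symmetric contrastive (InfoNCE-style) objective; the exact loss form and the temperature value are assumptions here, not details given in the abstract.

```python
import numpy as np

def _log_softmax(x):
    """Row-wise log-softmax with a max-shift for numerical stability."""
    x = x - x.max(axis=1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=1, keepdims=True))

def contrastive_loss(video_emb, text_emb, temperature=0.05):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    video_emb, text_emb: (batch, dim) L2-normalised arrays; row i of each
    array comes from the same video-caption pair (the caption may be the
    English original or a machine translation of it).
    """
    logits = video_emb @ text_emb.T / temperature      # (batch, batch) similarities
    idx = np.arange(len(logits))                       # matching pairs sit on the diagonal
    # cross-entropy in both directions: video->text and text->video
    loss_v2t = -_log_softmax(logits)[idx, idx].mean()
    loss_t2v = -_log_softmax(logits.T)[idx, idx].mean()
    return 0.5 * (loss_v2t + loss_t2v)
```

Pulling the diagonal (true pairs) toward high similarity and all off-diagonal pairs toward low similarity is what places English and non-English queries for the same video near each other in the shared space.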
Breakthroughs in transformer-based models have revolutionized not only NLP but also vision and multimodal systems. However, although visualization and interpretability tools have been developed for NLP models, the internal mechanisms of vision and multimodal transformers remain largely opaque. With the success of these transformers, it is increasingly critical to understand their inner workings, as unraveling these black boxes will lead to more capable and trustworthy models. To contribute to this quest, we propose VL-InterpreT, which provides novel interactive visualizations for interpreting the attentions and hidden representations in multimodal transformers. VL-InterpreT is a task-agnostic and integrated tool that (1) tracks a variety of statistics in attention heads throughout all layers for both the vision and language components, (2) visualizes cross-modal and intra-modal attentions through easily readable heatmaps, and (3) plots the hidden representations of vision and language tokens as they pass through the transformer layers. In this paper, we demonstrate the functionalities of VL-InterpreT by analyzing KD-VLP, an end-to-end pretrained vision-language multimodal transformer, on two visual question answering benchmarks, including Visual Commonsense Reasoning (VCR). Furthermore, we also present a few interesting findings about multimodal transformer behaviors learned through our tool.
Self-supervised vision-and-language pretraining (VLP) aims to learn transferable multimodal representations from large-scale image-text data and achieve strong performance on a wide range of vision-language tasks after fine-tuning. Previous mainstream VLP approaches typically adopt a two-step strategy that relies on external object detectors to encode images within a multimodal transformer framework, which suffers from a restricted object concept space, limited image context, and inefficient computation. In this paper, we propose an object-aware end-to-end VLP framework that directly feeds image grid features from CNNs into the transformer and learns multimodal representations jointly. More importantly, we propose to perform object knowledge distillation to facilitate learning cross-modal alignment at different semantic levels. To achieve this, we design two novel pretext tasks that take object features and their semantic labels from external detectors as supervision: 1) an object-guided masked vision modeling task that focuses on enforcing object-aware representation learning in the multimodal transformer; 2) a phrase-region alignment task that aims to improve cross-modal alignment by exploiting the similarities between noun phrases and object labels in the linguistic space. Extensive experiments on a variety of vision-language tasks demonstrate the efficacy of our proposed framework, and we achieve competitive or superior performance over existing pretraining strategies.
Diffusion models have achieved justifiable popularity by attaining state-of-the-art performance in generating realistic objects from seemingly arbitrarily complex data distributions, including when conditioning generation on labels. Unfortunately, however, their iterative nature renders them very computationally inefficient during the sampling process. For the multi-class conditional generation problem, we propose a novel, structurally unique framework of diffusion models which are hierarchically branched according to the inherent relationships between classes. In this work, we demonstrate that branched diffusion models offer major improvements in efficiently generating samples from multiple classes. We also showcase several other advantages of branched diffusion models, including ease of extension to novel classes in a continual-learning setting, and a unique interpretability that offers insight into these generative models. Branched diffusion models represent an alternative paradigm to their traditional linear counterparts, and can have large impacts in how we use diffusion models for efficient generation, online learning, and scientific discovery.
We develop a Synthetic Fusion Pyramid Network (SPF-Net) with a scale-aware loss function design for accurate crowd counting. Existing crowd-counting methods assume that the training annotation points were accurate and thus ignore the fact that noisy annotations can lead to large model-learning bias and counting error, especially for counting highly dense crowds that appear far away. To the best of our knowledge, this work is the first to properly handle such noise at multiple scales in end-to-end loss design and thus push the crowd counting state-of-the-art. We model the noise of crowd annotation points as a Gaussian and derive the crowd probability density map from the input image. We then approximate the joint distribution of crowd density maps with the full covariance of multiple scales and derive a low-rank approximation for tractability and efficient implementation. The derived scale-aware loss function is used to train the SPF-Net. We show that it outperforms various loss functions on four public datasets: UCF-QNRF, UCF CC 50, NWPU and ShanghaiTech A-B datasets. The proposed SPF-Net can accurately predict the locations of people in the crowd, despite training on noisy training annotations.
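The SPF-Net abstract above models each annotation point as a Gaussian and derives a crowd probability density map from it. A minimal single-scale sketch of that first step is shown below; the paper's full method uses a multi-scale joint distribution with full covariance and a low-rank approximation, which this toy version does not attempt, and the grid size and sigma here are illustrative choices.

```python
import numpy as np

def density_map(points, shape, sigma=4.0):
    """Build a crowd density map by placing an isotropic Gaussian at each
    annotated head position (x, y); sigma reflects the assumed annotation
    noise. Each Gaussian is normalised so every person contributes unit
    mass, which makes the map's integral equal the crowd count.
    """
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    dmap = np.zeros(shape, dtype=np.float64)
    for (x, y) in points:
        g = np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2.0 * sigma ** 2))
        g /= g.sum()              # unit mass per annotated person
        dmap += g
    return dmap
```

A counting model trained against such a map predicts a density whose spatial integral is the estimated count, which is what makes the formulation robust to small annotation offsets.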
The increased importance of mobile photography created a need for fast and performant RAW image processing pipelines capable of producing good visual results in spite of the mobile camera sensor limitations. While deep learning-based approaches can efficiently solve this problem, their computational requirements usually remain too large for high-resolution on-device image processing. To address this limitation, we propose a novel PyNET-V2 Mobile CNN architecture designed specifically for edge devices, able to process RAW 12MP photos directly on mobile phones in under 1.5 seconds while producing high perceptual photo quality. To train and evaluate the performance of the proposed solution, we use the real-world Fujifilm UltraISP dataset consisting of thousands of RAW-RGB image pairs captured with a professional medium-format 102MP Fujifilm camera and a popular Sony mobile camera sensor. The results demonstrate that the PyNET-V2 Mobile model can substantially surpass the quality of traditional ISP pipelines, while outperforming the previously introduced neural network-based solutions designed for fast image processing. Furthermore, we show that the proposed architecture is also compatible with the latest mobile AI accelerators such as NPUs or APUs, which can be used to further reduce the latency of the model to as little as 0.5 seconds. The dataset, code and pre-trained models used in this paper are available on the project website: https://github.com/gmalivenko/PyNET-v2
A new and efficient neural-network and finite-difference hybrid method is developed for solving the Poisson equation in a regular domain with jump discontinuities on embedded irregular interfaces. Since the solution has low regularity across the interface, when applying finite difference discretization to this problem, an additional treatment accounting for the jump discontinuities must be employed. Here, we aim to replace such extra effort with machine learning methodology to ease the implementation. The key idea is to decompose the solution into singular and regular parts. The neural network learning machinery, incorporating the given jump conditions, finds the singular solution, while the standard finite difference method is used to obtain the regular solution with associated boundary conditions. Regardless of the interface geometry, these two tasks only require supervised learning for function approximation and a fast direct solver for the Poisson equation, making the hybrid method easy to implement and efficient. The two- and three-dimensional numerical results show that the present hybrid method preserves second-order accuracy for the solution and its derivatives, and it is comparable with the traditional immersed interface method in the literature. As an application, we solve the Stokes equations with singular forces to demonstrate the robustness of the present method.
To achieve accurate 3D object detection at low cost for autonomous driving, many multi-camera methods have been proposed, solving the occlusion problem of monocular approaches. However, due to the lack of accurately estimated depth, existing multi-camera methods often generate multiple bounding boxes along the depth direction for difficult small objects such as pedestrians, resulting in extremely low recall. Furthermore, directly applying depth prediction modules to existing multi-camera methods, which usually consist of large network architectures, cannot meet the real-time requirements of self-driving applications. To address these issues, we propose Cross-view and Depth-guided Transformers for 3D object detection, CrossDTR. First, our lightweight depth predictor is designed to produce precise object-wise sparse depth maps and low-dimensional depth embeddings without requiring extra depth datasets during supervision. Second, a cross-view depth-guided transformer is developed to fuse the depth embeddings with image features from cameras of different views and generate 3D bounding boxes. Extensive experiments show that our method substantially surpasses existing multi-camera methods by 10 percent in pedestrian detection and about 3 percent in overall mAP and NDS metrics. Moreover, computational analysis shows that our method is 5 times faster than prior approaches. Our code will be made publicly available at https://github.com/sty61010/CrossDTR.
A spatial AI that can perform complex tasks through visual signals and cooperate with humans is highly anticipated. To achieve this, we need a visual SLAM that easily adapts to new scenes without pre-training and generates dense maps for downstream tasks in real time. None of the previous learning-based and non-learning-based visual SLAMs satisfy all of these needs, due to the intrinsic limitations of their components. In this work, we develop a visual SLAM named Orbeez-SLAM, which successfully combines implicit neural representations (NeRF) with visual odometry to achieve our goals. Moreover, Orbeez-SLAM can work with a monocular camera since it only needs RGB inputs, making it widely applicable to the real world. We validate its effectiveness on various challenging benchmarks. The results show that our SLAM is up to 800x faster than the strong baseline while achieving superior rendering results.
Recent advances in neural radiance fields (NeRF) have achieved state-of-the-art novel view synthesis and facilitated dense estimation of scene properties. However, NeRF often fails on large, unbounded scenes captured under very sparse views with the scene content concentrated far away from the camera, as is typical in field-robotics applications. In particular, NeRF-style algorithms perform poorly: (1) when there are insufficient views with little pose diversity, (2) when scenes contain saturation and shadows, and (3) when finely sampling large unbounded scenes with fine structures, which becomes computationally intensive. This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views. This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained with LiDAR and camera data, respectively. In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGM) alongside the NeRF model, and to leverage this occupancy grid to improve the sampling of points along a ray for volumetric rendering in metric space. Through extensive quantitative and qualitative experiments on scenes from the KITTI dataset, this paper demonstrates that the proposed method outperforms state-of-the-art NeRF models on both novel view synthesis and dense depth prediction tasks when trained on sparse input data.